One-step differentiation of iterative algorithms

Neural Information Processing Systems

For iterative algorithms, implicit differentiation alleviates the computational burden of automatic differentiation, but it requires a custom implementation of Jacobian evaluation. In this paper, we study one-step differentiation, also known as Jacobian-free backpropagation, a method as easy as automatic differentiation and as efficient as implicit differentiation for fast algorithms (e.g., superlinear optimization methods).
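Concretely, one-step differentiation backpropagates through only the final iteration of the solver instead of unrolling all of them. Below is a minimal sketch in JAX; the helper one_step_solve, the toy Newton step, and all names are illustrative assumptions, not the paper's code.

    import jax

    def one_step_solve(step, x0, theta, n_iters=50):
        """Fixed point of x = step(x, theta), differentiated Jacobian-free:
        run the solver with gradients cut, then track only the last step."""
        theta_ng = jax.lax.stop_gradient(theta)
        x = x0
        for _ in range(n_iters):
            x = step(x, theta_ng)                     # no gradient through the loop
        return step(jax.lax.stop_gradient(x), theta)  # gradient through one step only

    # Toy example: Newton's method for sqrt(theta), i.e. the root of x**2 - theta.
    step = lambda x, t: 0.5 * (x + t / x)

    g = jax.grad(lambda t: one_step_solve(step, 1.0, t))(4.0)
    print(g)  # ~0.25 = d sqrt(t)/dt at t = 4

Because Newton's method converges superlinearly, the derivative of the step with respect to x vanishes at the fixed point, so here the one-step gradient coincides with the implicit-differentiation gradient while costing a single backpropagated step.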



A Proofs

Neural Information Processing Systems

Further taking the usual assumption that X is compact, let us start with Proposition 3, a central observation needed in Theorem 2. […] Now we can proceed to prove the universality part of Theorem 2. Since the task admits a smooth separator, by Fubini's theorem and Proposition 3 we have F […]; the reader can think of λ as a uniform distribution over G (as in Theorem 2). The result follows directly from the combination of de Finetti's theorem […] with Kallenberg's noise transfer theorem, from which we have that the weights […] either i) satisfies Assumption 1 or ii) is an inner-product decision graph problem as in Definition 3. Further, the task has infinitely […] (as in Theorem 2). Finally, we follow Proposition 2's proof by simply replacing de Finetti's theorem with the Aldous-Hoover theorem, and define an RLC that samples the linear coefficients as follows […].
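For reference, the two exchangeability results the excerpt leans on can be stated in their standard textbook forms; the paper's exact formulations may differ, so these are not quotes from the appendix.

    % De Finetti: an infinite exchangeable sequence X_1, X_2, ... is a mixture
    % of i.i.d. sequences; there is a mixing measure \lambda over laws \mu with
    \Pr(X_1 \in A_1, \dots, X_n \in A_n)
        = \int \prod_{i=1}^{n} \mu(A_i) \, \lambda(\mathrm{d}\mu)
        \quad \text{for all } n.

    % Aldous--Hoover: a jointly exchangeable array (X_{ij}) admits a representation
    X_{ij} = f(U, U_i, U_j, U_{ij})
    % with U, (U_i), (U_{ij}) i.i.d. Uniform[0,1] and f measurable.

This matches the excerpt's reading of λ as a distribution over G, and presumably explains the final step: swapping de Finetti's theorem (sequences) for Aldous-Hoover (arrays) extends Proposition 2's argument to the graph-structured setting.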